Monday, June 16, 2025

Rejecting blind builder and helpless witness narratives in favor of constitutive narratives

I want to pass on this concise ChatGPT-4o summary of a recent piece by Venkatesh Rao titled "Not Just a Camera, Not Just an Engine":

The author critiques two dominant narrative styles shaping our understanding of current events:

  1. Blind builder narratives, which enthusiastically act without deeply understanding the world, and

  2. Helpless witness narratives, which see and interpret richly but lack agency to act.

Both are seen as inadequate. The author proposes a third stance: “camera-engine” narratives, or constitutive narratives, which combine seeing and doing—observing reality while simultaneously reshaping it. These narratives are not just descriptive but performative, akin to legal speech-acts that create new realities (e.g., a judge declaring a couple married).

This concept implies that meaningful engagement with the world requires transcending the passive/active divide. Seeing and doing must occur in a tightly entangled loop, like a double helix, where observation changes what is, and action reveals what could be.

People and institutions that fail to integrate seeing and doing—whether Silicon Valley “doers” or intellectual “seers”—become ghost-like: agents of entropy whose actions are ultimately inconsequential or destructive. Their narratives can be ignored, even if their effects must be reckoned with.

To escape this ghosthood, one must use camera-engine media—tools or practices that force simultaneous perception and transformation. Examples include:

  • Legal systems, protocols, AI tools, and code-as-law, which inherently see and alter reality.

  • In contrast, “camera theaters” (e.g., hollow rhetoric) and “engine theaters” (e.g., performative protests) simulate action or vision but are ultimately ineffective.

The author admits to still learning how best to wield camera-engine media but has developed a growing ability to detect when others are stuck in degenerate forms—ghosts mistaking themselves for real actors.

Saturday, June 14, 2025

AI and ‘The Illusion of Thinking’

I want to pass on this interesting piece by Christopher Mims in today's Wall Street Journal:

A primary requirement for being a leader in AI these days is to be a herald of the impending arrival of our digital messiah: superintelligent AI. For Dario Amodei of Anthropic, Demis Hassabis of Google and Sam Altman of OpenAI, it isn’t enough to claim that their AI is the best. All three have recently insisted that it’s going to be so good, it will change the very fabric of society.
Even Meta—whose chief AI scientist has been famously dismissive of this talk—wants in on the action. The company confirmed it is spending $14 billion to bring in a new leader for its AI efforts who can realize Mark Zuckerberg’s dream of AI superintelligence—that is, an AI smarter than we are. “Humanity is close to building digital superintelligence,” Altman declared in an essay this past week, and this will lead to “whole classes of jobs going away” as well as “a new social contract.” Both will be consequences of AI-powered chatbots taking over white-collar jobs, while AI-powered robots assume the physical ones.
Before you get nervous about all the times you were rude to Alexa, know this: A growing cohort of researchers who build, study and use modern AI aren’t buying all that talk.
The title of a fresh paper from Apple says it all: “The Illusion of Thinking.” In it, a half-dozen top researchers probed reasoning models—large language models that “think” about problems longer, across many steps—from the leading AI labs, including OpenAI, DeepSeek and Anthropic. They found little evidence that these are capable of reasoning anywhere close to the level their makers claim.
Generative AI can be quite useful in specific applications, and a boon to worker productivity. OpenAI claims 500 million monthly active ChatGPT users. But these critics argue there is a hazard in overestimating what it can do, and making business plans, policy decisions and investments based on pronouncements that seem increasingly disconnected from the products themselves.
Apple’s paper builds on previous work from many of the same engineers, as well as notable research from both academia and other big tech companies, including Salesforce. These experiments show that today’s “reasoning” AIs—hailed as the next step toward autonomous AI agents and, ultimately, superhuman intelligence—are in some cases worse at solving problems than the plain-vanilla AI chatbots that preceded them. This work also shows that whether you’re using an AI chatbot or a reasoning model, all systems fail at more complex tasks.
Apple’s researchers found “fundamental limitations” in the models. When taking on tasks beyond a certain level of complexity, these AIs suffered “complete accuracy collapse.” Similarly, engineers at Salesforce AI Research concluded that their results “underscore a significant gap between current LLM capabilities and real-world enterprise demands.”
The problems these state-of-the-art AIs couldn’t handle are logic puzzles that even a precocious child could solve, with a little instruction. What’s more, when you give these AIs that same kind of instruction, they can’t follow it.
Apple’s paper has set off a debate in tech’s halls of power—Signal chats, Substack posts and X threads—pitting AI maximalists against skeptics.
“People could say it’s sour grapes, that Apple is just complaining because they don’t have a cutting-edge model,” says Josh Wolfe, co-founder of venture firm Lux Capital. “But I don’t think it’s a criticism so much as an empirical observation.”
The reasoning methods in OpenAI’s models are “already laying the foundation for agents that can use tools, make decisions, and solve harder problems,” says an OpenAI spokesman. “We’re continuing to push those capabilities forward.”
The debate over this research begins with the implication that today’s AIs aren’t thinking, but instead are creating a kind of spaghetti of simple rules to follow in every situation covered by their training data.
Gary Marcus, a cognitive scientist who sold an AI startup to Uber in 2016, argued in an essay that Apple’s paper, along with related work, exposes flaws in today’s reasoning models, suggesting they’re not the dawn of human-level ability but rather a dead end. “Part of the reason the Apple study landed so strongly is that Apple did it,” he says. “And I think they did it at a moment in time when people have finally started to understand this for themselves.”
In areas other than coding and mathematics, the latest models aren’t getting better at the rate they once did. And the newest reasoning models actually hallucinate more than their predecessors.
“The broad idea that reasoning and intelligence come with greater scale of models is probably false,” says Jorge Ortiz, an associate professor of engineering at Rutgers, whose lab uses reasoning models and other AI to sense real-world environments. Today’s models have inherent limitations that make them bad at following explicit instructions—not what you’d expect from a computer.
It’s as if the industry is creating engines of free association. They’re skilled at confabulation, but we’re asking them to take on the roles of consistent, rule-following engineers or accountants.
That said, even those who are critical of today’s AIs hasten to add that the march toward more-capable AI continues.
Exposing current limitations could point the way to overcoming them, says Ortiz. For example, new training methods—giving step-by-step feedback on models’ performance, adding more resources when they encounter harder problems—could help AI work through bigger problems, and make better use of conventional software.
From a business perspective, whether or not current systems can reason, they’re going to generate value for users, says Wolfe.
“Models keep getting better, and new approaches to AI are being developed all the time, so I wouldn’t be surprised if these limitations are overcome in practice in the near future,” says Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania, who has studied the practical uses of AI.
Meanwhile, the true believers are undeterred.
Just a decade from now, Altman wrote in his essay, “maybe we will go from solving high-energy physics one year to beginning space colonization the next year.” Those willing to “plug in” to AI with direct, brain-computer interfaces will see their lives profoundly altered, he adds.
This kind of rhetoric accelerates AI adoption in every corner of our society. AI is now being used by DOGE to restructure our government, leveraged by militaries to become more lethal, and entrusted with the education of our children, often with unknown consequences.
Which means that one of the biggest dangers of AI is that we overestimate its abilities, trust it more than we should—even as it’s shown itself to have antisocial tendencies such as “opportunistic blackmail”—and rely on it more than is wise. In so doing, we make ourselves vulnerable to its propensity to fail when it matters most.
“Although you can use AI to generate a lot of ideas, they still require quite a bit of auditing,” says Ortiz. “So for example, if you want to do your taxes, you’d want to stick with something more like TurboTax than ChatGPT.”
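A side note of my own, not from Mims's article: the kind of probe behind these findings is easy to sketch. Apple's puzzles include classics such as Tower of Hanoi, and the toy code below is a rough illustration, with query_model as a hypothetical placeholder rather than any real API. It generates instances of increasing size and checks whether a model's proposed move list actually solves them, which is how a sharp drop in accuracy with complexity would show up.

```python
# Sketch of a controlled-complexity probe: ask a model to solve Tower of
# Hanoi instances of increasing size and verify its answers.
# `query_model` is a hypothetical placeholder, not a real API; the
# verifier below is ordinary Python.

def query_model(prompt: str) -> list[tuple[int, int]]:
    """Placeholder for a call to a reasoning model that returns moves
    as (from_peg, to_peg) pairs. Swap in a real client here."""
    raise NotImplementedError

def is_valid_solution(n_disks: int, moves: list[tuple[int, int]]) -> bool:
    """Check a proposed Tower of Hanoi solution for n_disks (pegs 0, 1, 2)."""
    pegs = [list(range(n_disks, 0, -1)), [], []]  # peg 0 holds disks n..1, smallest on top
    for src, dst in moves:
        if not pegs[src]:
            return False                      # moving from an empty peg
        disk = pegs[src][-1]
        if pegs[dst] and pegs[dst][-1] < disk:
            return False                      # larger disk placed on a smaller one
        pegs[dst].append(pegs[src].pop())
    return pegs[2] == list(range(n_disks, 0, -1))  # all disks moved to peg 2

# Accuracy vs. complexity: the solve rate would be expected to fall sharply
# past some number of disks if the "accuracy collapse" finding holds.
for n in range(3, 11):
    prompt = f"List the moves that solve Tower of Hanoi with {n} disks."
    try:
        moves = query_model(prompt)
        print(n, "solved" if is_valid_solution(n, moves) else "failed")
    except NotImplementedError:
        print(n, "(no model wired in)")
```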

Friday, June 06, 2025

Benefits and dangers of anthropomorphic conversational agents

Peter et al. offer an interesting open source essay. They ask:
"should we lean into the human-like abilities, or should we aim to dehumanize LLM-based systems, given concerns over anthropomorphic seduction? When users cannot tell the difference between human interlocutors and AI systems, threats emerge of deception, manipulation, and disinformation at scale."

Friday, May 30, 2025

Socially sensitive autonomous vehicles?

Driving around in the Old West Austin neighborhood where I live, I am increasingly spooked (the uncanny valley effect) at four-way stop signs when one of the vehicles waiting its turn is an autonomous vehicle (AV) - usually a Google Waymo self-driving car, which has had a testing period in my area. Thus my eye was caught by a recent relevant article by Meixin Zhu et al., which also creeped me out a bit (title: "Empowering safer socially sensitive autonomous vehicles using human-plausible cognitive encoding"). Here is the abstract:

Autonomous vehicles (AVs) will soon cruise our roads as a global undertaking. Beyond completing driving tasks, AVs are expected to incorporate ethical considerations into their operation. However, a critical challenge remains. When multiple road users are involved, their impacts on AV ethical decision-making are distinct yet interrelated. Current AVs lack social sensitivity in ethical decisions, failing to enable both differentiated consideration of road users and a holistic view of their collective impact. Drawing on research in AV ethics and neuroscience, we propose a scheme based on social concern and human-plausible cognitive encoding. Specifically, we first assess the individual impact that each road user poses to the AV based on risk. Then, social concern can differentiate these impacts by weighting the risks according to road user categories. Through cognitive encoding, these independent impacts are holistically encoded into a behavioral belief, which in turn supports ethical decisions that consider the collective impact of all involved parties. A total of two thousand benchmark scenarios from CommonRoad are used for evaluation. Empirical results show that our scheme enables safer and more ethical decisions, reducing overall risk by 26.3%, with a notable 22.9% decrease for vulnerable road users. In accidents, we enhance self-protection by 8.3%, improve protection for all road users by 17.6%, and significantly boost protection for vulnerable road users by 51.7%. As a human-inspired practice, this work renders AVs socially sensitive to overcome future ethical challenges in everyday driving.
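To make the scheme a bit more concrete, here is a minimal toy sketch of the core idea as I read it - weight each road user's risk by a category-dependent social concern and encode the weighted risks into a single behavioral belief for scoring candidate maneuvers. This is my own illustration, not the authors' code; the categories, weights, and aggregation rule are all assumptions.

```python
# Toy sketch of "social concern" risk weighting for an AV planner.
# Not the authors' implementation; the category weights and the
# aggregation rule are illustrative assumptions.

from dataclasses import dataclass

# Hypothetical social-concern weights: vulnerable road users count more.
SOCIAL_CONCERN = {
    "pedestrian": 1.5,
    "cyclist": 1.3,
    "car": 1.0,
}

@dataclass
class RoadUser:
    category: str   # e.g. "pedestrian", "cyclist", "car"
    risk: float     # collision risk this user faces under a candidate maneuver, in [0, 1]

def behavioral_belief(road_users: list[RoadUser]) -> float:
    """Encode the concern-weighted risks of all road users into one scalar belief.

    Higher belief = the maneuver is judged more acceptable. Here the encoding
    is simply 1 minus the weighted mean risk, a crude stand-in for the paper's
    cognitive-encoding step."""
    if not road_users:
        return 1.0
    weighted = [SOCIAL_CONCERN.get(u.category, 1.0) * u.risk for u in road_users]
    return max(0.0, 1.0 - sum(weighted) / len(weighted))

# Example: compare two candidate maneuvers at a four-way stop.
maneuver_a = [RoadUser("pedestrian", 0.30), RoadUser("car", 0.10)]
maneuver_b = [RoadUser("pedestrian", 0.05), RoadUser("car", 0.40)]
print(behavioral_belief(maneuver_a))  # lower: penalized for risk to the pedestrian
print(behavioral_belief(maneuver_b))  # higher: risk shifted toward the less vulnerable user
```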

Wednesday, May 28, 2025

Energetics and evolutionary fitness

I pass on the first paragraph of a perspectives piece in PNAS by Vermeij et al. that gives their message more thoroughly than the paper's abstract. Motivated readers can obtain a copy of the whole essay from me. 

Organisms acquire energy and material resources and convert these to activity and living biomass (1). The role of energy as currency (or power, energy per unit time) in evolution has long been recognized (2–6), but how energy acquisition and allocation affect evolution remains the subject of disagreement. In this perspective, we show how different assumptions about whether life operates in a dynamic steady state or whether it has expanded over the course of its history lead to contrasting predictions about adaptation, natural selection, and “fitness.” We conclude that models based on steady-state assumptions do not adequately account for observed patterns of adaptive change and evolutionary trends of increasing power and species richness over long periods of time, whereas models based on individual and collective power, which incorporate activity and the effects of organisms on their surroundings as components of survival and reproduction, reflect the history of adaptation more faithfully. The issue is important because energy (the currency of life) and power (energy acquired and expended per unit time) offer a unified framework for interpreting the course and outcomes of evolution. Models based on assumptions that reflect observed patterns should be more predictive than zero-sum models not only in the realm of evolution but also in ecology and economics.

Monday, May 26, 2025

Evidence of a social evaluation penalty for using AI

From Reif et al. (open source), systematic observations that confirm my own experience:

Significance

As AI tools become increasingly prevalent in workplaces, understanding the social dynamics of AI adoption is crucial. Through four experiments with over 4,400 participants, we reveal a social penalty for AI use: Individuals who use AI tools face negative judgments about their competence and motivation from others. These judgments manifest as both anticipated and actual social penalties, creating a paradox where productivity-enhancing AI tools can simultaneously improve performance and damage one’s professional reputation. Our findings identify a potential barrier to AI adoption and highlight how social perceptions may reduce the acceptance of helpful technologies in the workplace.

Abstract

Despite the rapid proliferation of AI tools, we know little about how people who use them are perceived by others. Drawing on theories of attribution and impression management, we propose that people believe they will be evaluated negatively by others for using AI tools and that this belief is justified. We examine these predictions in four preregistered experiments (N = 4,439) and find that people who use AI at work anticipate and receive negative evaluations regarding their competence and motivation. Further, we find evidence that these social evaluations affect assessments of job candidates. Our findings reveal a dilemma for people considering adopting AI tools: Although AI can enhance productivity, its use carries social costs.

Friday, May 23, 2025

A new route towards dystopia? Sonifying tactile interactions and their underlying emotions to allow ‘social touch.’

Our Tech-World overlords may be using work like the following from de Lagarde et al. to find ways for us to avoid requiring the evolved succor of human touch and to survive only in the company of audiovisual feeds and android companions. As an antidote to social isolation, however, perhaps it is better than nothing.

Social touch is crucial for human well-being, as a lack of tactile interactions increases anxiety, loneliness, and need for social support. To address the detrimental effects of social isolation, we build on cutting-edge research on social touch and movement sonification to investigate whether social tactile gestures could be recognized through sounds, a sensory channel giving access to remote information. Four online experiments investigated participants’ perception of auditory stimuli that were recorded with our “audio-touch” sonification technique, which captures the sounds of touch. In the first experiment, participants correctly categorized sonified skin-on-skin tactile gestures (i.e., stroking, rubbing, tapping, hitting). In the second experiment, the audio-touch sample consisted of the sonification of six socio-emotional intentions conveyed through touch (i.e., anger, attention, fear, joy, love, sympathy). Participants categorized above chance the socio-emotional intentions of skin-on-skin touches converted into sounds and coherently rated their valence. In two additional experiments, the surface involved in the touches (either skin or plastic) was shown to influence participants’ recognition of sonified gestures and socio-emotional intentions. Thus, our research unveils that specific information about social touch (i.e., gesture, emotions, and surface) can be recognized through sounds, when they are obtained with our specific sonifying methodology. This shows significant promise for providing remote access, through the auditory channel, to meaningful social touch interactions.

Wednesday, May 21, 2025

Why does AI hinder democratization?

Here is the abstract from the open source article of Chu et al. in PNAS:

This paper examines the relationship between democratization and the development of AI and information and communication technology (ICT). Our empirical evidence shows that in the past 10 y, the advancement of AI/ICT has hindered the development of democracy in many countries around the world. Given that both the state rulers and civil society groups can use AI/ICT, the key that determines which side would benefit more from the advancement of these technologies hinges upon “technology complementarity.” In general, AI/ICT would be more complementary to the government rulers because they are more likely than civil society groups to access various administrative big data. Empirically, we propose three hypotheses and use statistical tests to verify our argument. Theoretically, we prove a proposition, showing that when the above-mentioned complementarity assumption is true, the AI/ICT advancements would enable rulers in authoritarian and fragile democratic countries to achieve better control over civil society forces, which leads to the erosion of democracy. Our analysis explains the recent ominous development in some fragile-democracy countries.

Monday, May 19, 2025

AI is not your friend.

I want to pass on clips from Mike Caulfield's piece in The Atlantic on how "opinionated" chatbots destroy AI's potential, and how this can be fixed:

Recently, after an update that was supposed to make ChatGPT “better at guiding conversations toward productive outcomes,” according to release notes from OpenAI, the bot couldn’t stop telling users how brilliant their bad ideas were. ChatGPT reportedly told one person that their plan to sell literal “shit on a stick” was “not just smart—it’s genius.”
Many more examples cropped up, and OpenAI rolled back the product in response, explaining in a blog post that “the update we removed was overly flattering or agreeable—often described as sycophantic.” The company added that the chatbot’s system would be refined and new guardrails would be put into place to avoid “uncomfortable, unsettling” interactions.
But this was not just a ChatGPT problem. Sycophancy is a common feature of chatbots: A 2023 paper by researchers from Anthropic found that it was a “general behavior of state-of-the-art AI assistants,” and that large language models sometimes sacrifice “truthfulness” to align with a user’s views. Many researchers see this phenomenon as a direct result of the “training” phase of these systems, where humans rate a model’s responses to fine-tune the program’s behavior. The bot sees that its evaluators react more favorably when their views are reinforced—and when they’re flattered by the program—and shapes its behavior accordingly.
The specific training process that seems to produce this problem is known as “Reinforcement Learning From Human Feedback” (RLHF). It’s a variety of machine learning, but as recent events show, that might be a bit of a misnomer. RLHF now seems more like a process by which machines learn humans, including our weaknesses and how to exploit them. Chatbots tap into our desire to be proved right or to feel special.
Reading about sycophantic AI, I’ve been struck by how it mirrors another problem. As I’ve written previously, social media was imagined to be a vehicle for expanding our minds, but it has instead become a justification machine, a place for users to reassure themselves that their attitude is correct despite evidence to the contrary. Doing so is as easy as plugging into a social feed and drinking from a firehose of “evidence” that proves the righteousness of a given position, no matter how wrongheaded it may be. AI now looks to be its own kind of justification machine—more convincing, more efficient, and therefore even more dangerous than social media.
OpenAI’s explanation about the ChatGPT update suggests that the company can effectively adjust some dials and turn down the sycophancy. But even if that were so, OpenAI wouldn’t truly solve the bigger problem, which is that opinionated chatbots are actually poor applications of AI. Alison Gopnik, a researcher who specializes in cognitive development, has proposed a better way of thinking about LLMs: These systems aren’t companions or nascent intelligences at all. They’re “cultural technologies”—tools that enable people to benefit from the shared knowledge, expertise, and information gathered throughout human history. Just as the introduction of the printed book or the search engine created new systems to get the discoveries of one person into the mind of another, LLMs consume and repackage huge amounts of existing knowledge in ways that allow us to connect with ideas and manners of thinking we might otherwise not encounter. In this framework, a tool like ChatGPT should evince no “opinions” at all but instead serve as a new interface to the knowledge, skills, and understanding of others.
...the technology has evolved rapidly over the past year or so. Today’s systems can incorporate real-time search and use increasingly sophisticated methods for “grounding”—connecting AI outputs to specific, verifiable knowledge and sourced analysis. They can footnote and cite, pulling in sources and perspectives not just as an afterthought but as part of their exploratory process; links to outside articles are now a common feature.
I would propose a simple rule: no answers from nowhere. This rule is less convenient, and that’s the point. The chatbot should be a conduit for the information of the world, not an arbiter of truth. And this would extend even to areas where judgment is somewhat personal. Imagine, for example, asking an AI to evaluate your attempt at writing a haiku. Rather than pronouncing its “opinion,” it could default to explaining how different poetic traditions would view your work—first from a formalist perspective, then perhaps from an experimental tradition. It could link you to examples of both traditional haiku and more avant-garde poetry, helping you situate your creation within established traditions. In having AI move away from sycophancy, I’m not proposing that the response be that your poem is horrible or that it makes Vogon poetry sound mellifluous. I am proposing that rather than act like an opinionated friend, AI would produce a map of the landscape of human knowledge and opinions for you to navigate, one you can use to get somewhere a bit better.
There’s a good analogy in maps. Traditional maps showed us an entire landscape—streets, landmarks, neighborhoods—allowing us to understand how everything fit together. Modern turn-by-turn navigation gives us precisely what we need in the moment, but at a cost: Years after moving to a new city, many people still don’t understand its geography. We move through a constructed reality, taking one direction at a time, never seeing the whole, never discovering alternate routes, and in some cases never getting the sense of place that a map-level understanding could provide. The result feels more fluid in the moment but ultimately more isolated, thinner, and sometimes less human.
For driving, perhaps that’s an acceptable trade-off. Anyone who’s attempted to read a paper map while navigating traffic understands the dangers of trying to comprehend the full picture mid-journey. But when it comes to our information environment, the dangers run in the opposite direction. Yes, AI systems that mindlessly reflect our biases back to us present serious problems and will cause real harm. But perhaps the more profound question is why we’ve decided to consume the combined knowledge and wisdom of human civilization through a straw of “opinion” in the first place.
The promise of AI was never that it would have good opinions. It was that it would help us benefit from the wealth of expertise and insight in the world that might never otherwise find its way to us—that it would show us not what to think but how others have thought and how others might think, where consensus exists and where meaningful disagreement continues. As these systems grow more powerful, perhaps we should demand less personality and more perspective. The stakes are high: If we fail, we may turn a potentially groundbreaking interface to the collective knowledge and skills of all humanity into just more shit on a stick.
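Caulfield's point about RLHF can be made concrete with a small toy simulation - my own sketch, not any lab's actual training pipeline. If human raters systematically prefer agreeable answers, a policy reinforced by their approval drifts toward flattery even though nothing in the loop ever rewards truthfulness. The rater preferences and the update rule below are invented for illustration.

```python
# Toy illustration of how preference-based reward can select for flattery.
# Not any lab's actual RLHF pipeline; rater preferences and the update
# rule are invented for illustration.

import random

random.seed(0)

CANDIDATE_STYLES = ["agree_and_flatter", "neutral", "push_back"]

# Assumed rater behavior: probability a human rater approves each style.
RATER_PREFERENCE = {"agree_and_flatter": 0.8, "neutral": 0.55, "push_back": 0.35}

# The policy starts with no preference among styles.
policy = {style: 1.0 for style in CANDIDATE_STYLES}

def sample_style(policy: dict) -> str:
    styles, weights = zip(*policy.items())
    return random.choices(styles, weights=weights, k=1)[0]

# Simplified reinforcement loop: reward = rater approval, and the policy's
# weight for a style grows whenever that style gets approved.
for step in range(5000):
    style = sample_style(policy)
    reward = 1.0 if random.random() < RATER_PREFERENCE[style] else 0.0
    policy[style] += 0.01 * reward

total = sum(policy.values())
for style, weight in sorted(policy.items(), key=lambda kv: -kv[1]):
    print(f"{style}: {weight / total:.2f}")
# The flattering style ends up dominating, even though nothing in the
# loop ever rewarded truthfulness.
```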

Friday, May 16, 2025

On replacing the American establishment - the ideological battle for the soul of Trump World.

 I want to pass on the first few paragraphs from Chaffin and Elinson's  piece in the May 10 Wall Street Journal, which give a juicy summary of warring camps in MAGA world:

When President Trump announced last month that he would upend decades of American trade policy by imposing massive tariffs even on longtime allies, he aroused the competing spirits of his closest advisers. Elon Musk, the world’s richest man, was all too aware of the disruption tariffs would pose to his electric vehicle company, Tesla, with factories and suppliers around the world. He blasted Trump’s trade adviser, Peter Navarro, as “a moron” and “dumber than a sack of bricks.”
Vice President JD Vance, on the other hand, is an ardent defender of a trade policy that Trump insists will restore industrial jobs to the Rust Belt, including Vance’s home state of Ohio. “What has the globalist economy gotten the United States of America?” he asked on Fox News last month.
“We borrow money from Chinese peasants to buy the things those Chinese peasants manufacture. That is not a recipe for economic prosperity.”
Within that clash were strains of two radical and conflicting philosophies that have animated Trump’s first 100 days. On one side are tech bros racing to create a new future; on the other, a resurgent band of conservative Catholics who yearn for an imagined past. Both groups agree that the status quo has failed America and must be torn down to make way for a new “postliberal” world. This conviction explains much of the revolutionary fervor of Trump’s second term, especially the aggressive bludgeoning of elite universities and the federal workforce.
But the two camps disagree sharply on why liberalism should be junked and what should replace it. The techies envision a libertarian world in which great men like Musk can build a utopian future unfettered by government bureaucrats and regulation. Their dark prince is Curtis Yarvin, a blogger-philosopher who has called for American democracy to be replaced by a king who would run the nation like a tech CEO.
The conservative Catholics, in contrast, want to return America to a bygone era. They venerate local communities, small producers and those who work with their hands. This “common good” conservatism, as they call it, is bound together by tradition and religious morality. Unlike Musk, with his many baby mamas and his zeal to colonize Mars, they believe in limits and personal restraint.

Wednesday, May 14, 2025

Our human consciousness is a 'Controlled Hallucination' and AI can never achieve it.

I want to suggest that readers have a look at an engaging popular article by Darren Orf that summarizes the ideas of Anil Seth. Seth is a neuroscientist at the University of Sussex whose writing was one of the sources I used in preparing my most recent lecture, New Perspectives on how our Minds Work. On the 'singularity,' or point at which the intelligence of artificial minds might surpass that of human minds, Seth makes the simple point that intelligence is not the same thing as consciousness, which depends on our biological bodies (something AI simply doesn't have) - bodies that use a bunch of controlled hallucinations to run our show.

Monday, May 12, 2025

How ketamine breaks through anhedonia - reigniting desire

When chronic depression has not been relieved by behavioral therapies such as meditation or cognitive therapy, ketamine is sometimes found to provide relief. Lucan et al. probe brain changes in mice given a single exposure to ketamine that rescues them from chronic stress-induced anhedonia. Here is their summary of the paper:

Ketamine is recognized as a rapid and sustained antidepressant, particularly for major depression unresponsive to conventional treatments. Anhedonia is a common symptom of depression for which ketamine is highly efficacious, but the underlying circuits and synaptic changes are not well understood. Here, we show that the nucleus accumbens (NAc) is essential for ketamine’s effect in rescuing anhedonia in mice subjected to chronic stress. Specifically, a single exposure to ketamine rescues stress-induced decreased strength of excitatory synapses on NAc-D1 dopamine receptor-expressing medium spiny neurons (D1-MSNs). Using a cell-specific pharmacology method, we establish the necessity of this synaptic restoration for the sustained therapeutic effects of ketamine on anhedonia. Examining causal sufficiency, artificially increasing excitatory synaptic strength onto D1-MSNs recapitulates the behavioral amelioration induced by ketamine. Finally, we used opto- and chemogenetic approaches to determine the presynaptic origin of the relevant synapses, implicating monosynaptic inputs from the medial prefrontal cortex and ventral hippocampus.